Section: New Results

Legal aspects of systems designed for judicial risk quantification

Participant: Jérôme Dupré

Within the ANJA team, systems designed to quantify judicial risk using machine learning (AI) have been developed. One of the team members, a former French magistrate, has taken part in this research; in parallel, he has worked on designing a legal framework applicable to this activity.
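
For concreteness, the following is a minimal, hypothetical sketch of the kind of system at issue: a model that turns encoded case features into an estimated probability of success. The features, the synthetic data, and the choice of logistic regression are illustrative assumptions, not the team's actual design.

```python
# Hypothetical sketch of a judicial-risk quantification model.
# All features, data and modeling choices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical encoded case features, e.g. normalized claim amount,
# court identifier, presence of favorable prior rulings.
X = rng.normal(size=(500, 3))
# Hypothetical outcomes of past cases (1 = claim accepted).
y = (X @ np.array([1.5, -0.5, 2.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The "judicial risk" for a new case is an estimated probability, not a verdict.
new_case = np.array([[0.2, -1.0, 0.8]])
print(f"Estimated probability of success: {model.predict_proba(new_case)[0, 1]:.2f}")
```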

Artificial intelligence (AI), particularly when applied to justice, is subject to existing rules of law, which apply even in the absence of AI-specific legislation. As with any new field of activity (e.g., the Internet), the notion of a “legal vacuum” must not be confused with that of a “legislative void”. It is therefore necessary to identify how these technologies can be protected, and which liability rules, for designers and/or users, may already apply.

The two main concerns relate to property and liability.

1. Regarding property, one can observe that predictive/quantitative solutions based on artificial intelligence result from a combination of technical criteria, databases, algorithms and software, each subject to specific legal protections. These elements may thus be protected by copyright (for the technical criteria); by database law, copyright and unfair competition (for the databases); by trade secret (for the algorithms); and by copyright (for the software). One may ask whether a unified legal status specific to this complex reality should ultimately be created, but it is probably too early to legislate.

At the heart of the solution is the algorithm, an immaterial element which, in France, is the least well protected by law (it belongs to the realm of “ideas”), which justifies keeping it secret.

2. Regarding liability, we observe that this “black box” (its secrecy being a consequence of its complexity and of the investments made) may cause harm, either through misuse or because it is not correctly designed.

French law offers a range of remedies to the victim, depending on the origin of the damage (see 6.2).

Trust in the results is also a factor to be considered. Thus, in the absence of a technical problem specific to the solution, misuse by the legal professional providing legal advice may, for example, engage his or her contractual liability. From this standpoint, the degree of reliance placed on the technology and the way it is presented are essential, which warrants particular attention to how contracts relating to these services are drafted.

The designer of a defective solution may be held to the warranty against hidden defects under Article 1641 of the French Civil Code. (In the absence of a contract, liability may also be incurred under Article 1242, paragraph 1, of the same Code.)

Standardization of algorithms, which could be tested by an independent body bound to secrecy, is also an option, but it carries a risk of paralyzing a promising market in the field of mathematics.
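
As a hedged illustration of how such testing could respect secrecy, the sketch below assumes the independent body interacts with a sealed model only through a prediction function and scores it on reference cases; the interface and the benchmark are invented for illustration, not an actual certification procedure.

```python
# Sketch of black-box auditing: the auditor never inspects the model's
# internals, only its input/output behavior on a reference benchmark.
from typing import Callable, Sequence

def audit_accuracy(predict: Callable[[Sequence[float]], int],
                   benchmark: list[tuple[Sequence[float], int]]) -> float:
    """Score a sealed predictor on reference cases, inspecting nothing else."""
    hits = sum(1 for features, outcome in benchmark if predict(features) == outcome)
    return hits / len(benchmark)

# Hypothetical reference cases and a trivial stand-in predictor.
benchmark = [([1.0, 2.0], 1), ([0.0, -1.0], 0), ([2.0, 0.5], 1)]
dummy_predictor = lambda features: int(features[0] > 0.5)
print(f"Benchmark accuracy: {audit_accuracy(dummy_predictor, benchmark):.2f}")
```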

More generally, it seems necessary to comply with the provisions on personal data enforced by the CNIL (the French Data Protection Authority) (Law No 78-17 of 6 January 1978, especially Article 10, and soon Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, applicable in 2018) and with privacy law. A large volume of data is indeed likely to reveal information that does not follow from any single piece of data taken separately. But this risk is probably more present in the field of big data than in algorithms, the data used for learning being “dissolved” in the resulting formula.
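
The following toy example, under simplifying assumptions (a plain linear regression on synthetic data), illustrates this “dissolution”: after training, only a few aggregate coefficients remain, from which individual records cannot be read off directly.

```python
# Toy illustration: a thousand individual records are condensed into
# three aggregate parameters once the model is trained.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))   # 1000 individual "case" records...
y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=1000)

model = LinearRegression().fit(X, y)

# ...reduced to three numbers; no individual record is stored in the model.
print("Coefficients:", model.coef_, "Intercept:", model.intercept_)
```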